Collaborating Authors: Northwestern University


Imaging-Based Mortality Prediction in Patients with Systemic Sclerosis

Peltekian, Alec K., Senkow, Karolina, Durak, Gorkem, Grudzinski, Kevin M., Bemiss, Bradford C., Dematte, Jane E., Richardson, Carrie, Markov, Nikolay S., Carns, Mary, Aren, Kathleen, Soriano, Alexandra, Dapas, Matthew, Perlman, Harris, Gundersheimer, Aaron, Selvan, Kavitha C., Varga, John, Hinchcliff, Monique, Warrior, Krishnan, Gao, Catherine A., Wunderink, Richard G., Budinger, GR Scott, Choudhary, Alok N., Esposito, Anthony J., Misharin, Alexander V., Agrawal, Ankit, Bagci, Ulas

arXiv.org Artificial Intelligence

Interstitial lung disease (ILD) is a leading cause of morbidity and mortality in systemic sclerosis (SSc). Chest computed tomography (CT) is the primary imaging modality for diagnosing and monitoring lung complications in SSc patients, yet its role in predicting disease progression and mortality has not been fully clarified. This study introduces a novel, large-scale longitudinal chest CT analysis framework that utilizes radiomics and deep learning to predict mortality associated with lung complications of SSc. We collected and analyzed 2,125 CT scans from SSc patients enrolled in the Northwestern Scleroderma Registry, conducting mortality analyses at one, three, and five years using advanced imaging analysis techniques. Death labels were assigned based on recorded deaths over the one-, three-, and five-year intervals, confirmed by expert physicians. In our dataset, 181, 326, and 428 of the 2,125 CT scans were from patients who died within one, three, and five years, respectively. We fine-tuned pre-trained ResNet-18, DenseNet-121, and Swin Transformer models on the 2,125 CT scans of SSc patients. The models achieved AUCs of 0.769, 0.801, and 0.709 for predicting mortality within one, three, and five years, respectively. Our findings highlight the potential of radiomics and deep learning methods to improve early detection and risk assessment of SSc-related interstitial lung disease, marking a significant advancement in the literature.
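The horizon-based death labeling described in the abstract can be sketched as a small helper. This is a minimal sketch under stated assumptions: the function name is hypothetical, a year is approximated as 365 days, and a real registry pipeline would also handle censoring and loss to follow-up.

```python
from datetime import date

def mortality_labels(scan_date, death_date, horizons=(1, 3, 5)):
    """Assign a binary death label per follow-up horizon (in years).

    A scan is labeled 1 for a horizon if the patient died within that
    many years of the scan date; censoring is ignored in this sketch.
    """
    labels = {}
    for years in horizons:
        if death_date is None:
            labels[years] = 0
        else:
            days_to_death = (death_date - scan_date).days
            labels[years] = int(0 <= days_to_death <= 365 * years)
    return labels

# A scan from a patient who died roughly two years later is negative
# for the one-year horizon but positive for three and five years:
print(mortality_labels(date(2015, 1, 1), date(2017, 1, 1)))
```

Each scan thus yields three separate binary targets, matching the one-, three-, and five-year models reported in the abstract.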


DeepSeek may have found a new way to improve AI's ability to remember

MIT Technology Review

An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI's ability to "remember." Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek's new model performs on par with top models on key benchmarks. But researchers say the model's main innovation lies in how it processes information--specifically, how it stores and retrieves memories. Improving how AI models "remember" information could reduce the computing power they need to run, thus mitigating AI's large (and growing) carbon footprint.


Deep-Learning Control of Lower-Limb Exoskeletons via Simplified Therapist Input

Vianello, Lorenzo, Lhoste, Clément, Küçüktabak, Emek Barış, Short, Matthew, Hargrove, Levi, Pons, Jose L.

arXiv.org Artificial Intelligence

Partial-assistance exoskeletons hold significant potential for gait rehabilitation by promoting active participation during (re)learning of normative walking patterns. Typically, the control of interaction torques in partial-assistance exoskeletons relies on a hierarchical control structure. These approaches require extensive calibration due to the complexity of the controller and user-specific parameter tuning, especially for activities like stair or ramp navigation. To address the limitations of hierarchical control in exoskeletons, this work proposes a three-step, data-driven approach: (1) using recent sensor data to probabilistically infer locomotion states (landing step length, landing step height, walking velocity, step clearance, gait phase), (2) allowing therapists to modify these features via a user interface, and (3) using the adjusted locomotion features to predict the desired joint posture and to modulate stiffness in a spring-damper system based on prediction uncertainty. We evaluated the proposed approach with two healthy participants engaging in treadmill walking and stair ascent and descent at varying speeds, with and without external modification of the gait features through a user interface. Results showed variations in kinematics according to the gait characteristics, and negative interaction power suggesting exoskeleton assistance across the different conditions.


A Computational Method for Measuring "Open Codes" in Qualitative Analysis

Chen, John, Lotsos, Alexandros, Zhao, Lexie, Wang, Caiyi, Hullman, Jessica, Sherin, Bruce, Wilensky, Uri, Horn, Michael

arXiv.org Artificial Intelligence

Qualitative analysis is critical to understanding human datasets in many social science disciplines. Open coding is an inductive qualitative process that identifies and interprets "open codes" from datasets. Yet, meeting methodological expectations (such as "as exhaustive as possible") can be challenging. While many machine learning (ML)/generative AI (GAI) studies have attempted to support open coding, few have systematically measured or evaluated GAI outcomes, increasing potential bias risks. Building on Grounded Theory and Thematic Analysis theories, we present a computational method to measure and identify potential biases from "open codes" systematically. Instead of operationalizing human expert results as the "ground truth," our method is built upon a team-based approach between human and machine coders. We experiment with two HCI datasets to establish this method's reliability by 1) comparing it with human analysis, and 2) analyzing its output stability. We present evidence-based suggestions and example workflows for ML/GAI to support open coding.


IPMN Risk Assessment under Federated Learning Paradigm

Pan, Hongyi, Hong, Ziliang, Durak, Gorkem, Keles, Elif, Aktas, Halil Ertugrul, Taktak, Yavuz, Medetalibeyoglu, Alpay, Zhang, Zheyuan, Velichko, Yury, Spampinato, Concetto, Schoots, Ivo, Bruno, Marco J., Tiwari, Pallavi, Bolan, Candice, Gonda, Tamas, Miller, Frank, Keswani, Rajesh N., Wallace, Michael B., Xu, Ziyue, Bagci, Ulas

arXiv.org Artificial Intelligence

Accurate classification of Intraductal Papillary Mucinous Neoplasms (IPMN) is essential for identifying high-risk cases that require timely intervention. In this study, we develop a federated learning framework for multi-center IPMN classification utilizing a comprehensive pancreas MRI dataset. This dataset includes 653 T1-weighted and 656 T2-weighted MRI images, accompanied by corresponding IPMN risk scores from 7 leading medical institutions, making it the largest and most diverse dataset for IPMN classification to date. We assess the performance of DenseNet-121 in both centralized and federated settings for training on distributed data. Our results demonstrate that the federated learning approach achieves high classification accuracy comparable to centralized learning while ensuring data privacy across institutions. This work marks a significant advancement in collaborative IPMN classification, facilitating secure and high-accuracy model training across multiple centers.
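Federated training of the kind described above typically aggregates locally trained model weights with a rule such as FedAvg, sketched below in pure Python over flattened weight vectors. Whether this paper uses FedAvg specifically is an assumption; the function and variable names are hypothetical.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: combine per-center model weights,
    weighted by each center's local dataset size, so that no center
    ever shares its raw MRI data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Two hypothetical centers with tiny flattened weight vectors;
# the larger center (300 cases vs. 100) dominates the average.
global_weights = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[100, 300],
)
print(global_weights)  # [2.5, 3.5]
```

In a full system this aggregation runs once per communication round: each institution fine-tunes the shared DenseNet-121 locally, sends back only weights, and receives the new global model.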


What are digital arrests, the newest deepfake tool used by cybercriminals?

Al Jazeera

An Indian textile baron has revealed that he was duped out of 70 million rupees ($833,000) by online scammers impersonating federal investigators and even the Supreme Court chief justice. The fraudsters, posing as officers from India's Central Bureau of Investigation (CBI), called SP Oswal, chairman and managing director of the textile manufacturer Vardhman, on August 28 and accused him of money laundering. For the next two days, Oswal was under digital surveillance: he was ordered to keep Skype open on his phone 24/7, during which he was interrogated and threatened with arrest. The fraudsters also conducted a fake virtual court hearing with a digital impersonation of Chief Justice of India DY Chandrachud as the judge. After the court's verdict, Oswal paid the amount via Skype without realising that he was the latest victim of an online scam using a new modus operandi called "digital arrest".


Haptic Transparency and Interaction Force Control for a Lower-Limb Exoskeleton

Küçüktabak, Emek Barış, Wen, Yue, Kim, Sangjoon J., Short, Matthew, Ludvig, Daniel, Hargrove, Levi, Perreault, Eric, Lynch, Kevin, Pons, Jose

arXiv.org Artificial Intelligence

Controlling the interaction forces between a human and an exoskeleton is crucial for providing transparency or adjusting assistance or resistance levels. However, it is an open problem to control the interaction forces of lower-limb exoskeletons designed for unrestricted overground walking. For these types of exoskeletons, it is challenging to implement force/torque sensors at every contact between the user and the exoskeleton for direct force measurement. Moreover, it is important to compensate for the exoskeleton's whole-body gravitational and dynamical forces, especially for heavy lower-limb exoskeletons. Previous works either simplified the dynamic model by treating the legs as independent double pendulums, or they did not close the loop with interaction force feedback. The proposed whole-exoskeleton closed-loop compensation (WECC) method calculates the interaction torques during the complete gait cycle by using whole-body dynamics and joint torque measurements on a hip-knee exoskeleton. Furthermore, it uses a constrained optimization scheme to track desired interaction torques in a closed loop while considering physical and safety constraints. We evaluated the haptic transparency and dynamic interaction torque tracking of WECC control on three subjects. We also compared the performance of WECC with a controller based on a simplified dynamic model and a passive version of the exoskeleton. The WECC controller results in a consistently low absolute interaction torque error during the whole gait cycle for both zero and nonzero desired interaction torques. In contrast, the simplified controller yields poor performance in tracking desired interaction torques during the stance phase.
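The core estimation step in WECC, inferring interaction torque from joint torque measurements minus the whole-body model's contribution, can be sketched per joint. This is a deliberately simplified sketch: the function names are hypothetical, and the safety clip below stands in for the paper's constrained optimization, which it does not reproduce.

```python
def interaction_torque(tau_measured, tau_model):
    """Estimate human-exoskeleton interaction torque per joint as
    measured joint torque minus the whole-body model's predicted
    gravitational/dynamical torque."""
    return [tm - td for tm, td in zip(tau_measured, tau_model)]

def track_interaction(tau_int, tau_des, gain=0.5, tau_limit=15.0):
    """Closed-loop correction toward a desired interaction torque,
    clipped to a safety limit (a crude stand-in for the constrained
    optimization used in WECC)."""
    cmd = [gain * (d - a) for d, a in zip(tau_des, tau_int)]
    return [max(-tau_limit, min(tau_limit, c)) for c in cmd]

# Hip and knee: measured torques vs. model-predicted torques.
tau_int = interaction_torque([10.0, -5.0], [8.0, -2.0])
print(tau_int)                                  # [2.0, -3.0]
# Transparency mode: drive interaction torques toward zero.
print(track_interaction(tau_int, [0.0, 0.0]))   # [-1.0, 1.5]
```

Setting the desired interaction torque to zero yields haptic transparency; nonzero targets yield assistance or resistance, matching the two cases evaluated in the abstract.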


Energy-efficient transistor could allow smartwatches to use AI

New Scientist

A reconfigurable transistor can run AI processes using 100 times less electricity than the standard transistors found in silicon-based chips. It could help spur development of a new generation of smartwatches or other wearable devices capable of using powerful AI technology – something that is impractical today because many AI algorithms would rapidly drain the batteries of wearables built with ordinary transistors. The new transistors are made of molybdenum disulphide and carbon nanotubes. They can be continuously reconfigured by electric fields to almost instantaneously handle multiple steps in AI-driven processes. In contrast, silicon-based transistors – which act as tiny on-or-off electronic switches – can only perform one step at a time.


Microelectronics give researchers a remote control for biological robots

#artificialintelligence

Then, they saw the light. Now, miniature biological robots have gained a new trick: remote control. The hybrid "eBiobots" are the first to combine soft materials, living muscle and microelectronics, said researchers at the University of Illinois Urbana-Champaign, Northwestern University and collaborating institutions. They described their centimeter-scale biological machines in the journal Science Robotics. "Integrating microelectronics allows the merger of the biological world and the electronics world, both with many advantages of their own, to now produce these electronic biobots and machines that could be useful for many medical, sensing and environmental applications in the future," said study co-leader Rashid Bashir, an Illinois professor of bioengineering and dean of the Grainger College of Engineering.


Microelectronics give researchers a remote control for biological robots

Robohub

A photograph of an eBiobot prototype, lit with blue microLEDs. Remotely controlled miniature biological robots have many potential applications in medicine, sensing and environmental monitoring. Then, they saw the light. Now, miniature biological robots have gained a new trick: remote control. The hybrid "eBiobots" are the first to combine soft materials, living muscle and microelectronics, said researchers at the University of Illinois Urbana-Champaign, Northwestern University and collaborating institutions.